Motivation: Enhancers are important cis-regulatory elements that regulate a wide range of biological functions and enhance the transcription of target genes. Although many state-of-the-art computational methods have been proposed to efficiently identify enhancers, learning globally contextual features remains a challenge. Given the similarities between biological sequences and natural language sentences, BERT-based language techniques have been applied to extract complex contextual features in various computational biology tasks such as protein function/structure prediction. To accelerate research on enhancer identification, it is urgent to construct a BERT-based enhancer language model. Results: In this paper, we propose a multi-scale enhancer identification method (iEnhancer-ELM) based on enhancer language models, which treats enhancer sequences as natural language sentences composed of k-mer nucleotides. iEnhancer-ELM can extract contextual information of multi-scale k-mers with positions from raw enhancer sequences. Benefiting from the complementary information of k-mers at multiple scales, we ensemble four iEnhancer-ELM models to improve enhancer identification. Benchmark comparisons show that our model outperforms state-of-the-art methods. Through the interpretable attention mechanism, we find 30 biological patterns, of which 40% (12/30) are verified by a widely used motif tool (STREME) and a popular database (JASPAR), demonstrating that our model has the potential to reveal the biological mechanisms of enhancers. Availability: The source code is available at https://github.com/chen-bioinfo/iEnhancer-ELM Contact: junjiechen@hit.edu.cn and junjie.chen.hit@gmail.com; Supplementary information: Supplementary data are available at Bioinformatics online.
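As a minimal sketch of the abstract's core idea, the following shows how a DNA sequence can be turned into "sentences" of k-mer "words" at several scales. The stride-1 overlapping tokenization is an assumption (a common convention for DNA language models); the abstract does not specify iEnhancer-ELM's exact tokenizer.

```python
def kmer_tokenize(sequence, k):
    """Split a DNA sequence into overlapping k-mer 'words' (stride 1)."""
    return [sequence[i:i + k] for i in range(len(sequence) - k + 1)]

def multi_scale_tokenize(sequence, scales=(3, 4, 5, 6)):
    """One token list per scale; each could feed a separate language model,
    mirroring the four-model ensemble described above."""
    return {k: kmer_tokenize(sequence, k) for k in scales}

print(kmer_tokenize("ACGTAC", 3))  # ['ACG', 'CGT', 'GTA', 'TAC']
```

Each scale captures different sequence context, which is why the abstract describes the multi-scale models as complementary.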
Calibration strengthens the trustworthiness of black-box models by producing more accurate confidence estimates on given examples. However, little is known about whether model explanations can help confidence calibration. Intuitively, humans inspect important feature attributions to decide whether a model is trustworthy; similarly, explanations can tell us when the model may or may not know. Inspired by this, we propose a method named CME that leverages model explanations to make the model less confident on examples with non-inductive attributions. The idea is that when the model is not highly confident, it is difficult to identify strong indications of any class, so the tokens accordingly do not have high attribution scores for any class, and vice versa. We conduct extensive experiments on six datasets with two popular pre-trained language models in both in-domain and out-of-domain settings. The results show that CME improves calibration performance in all settings, and the expected calibration errors are further reduced when CME is combined with temperature scaling. Our findings highlight that model explanations can help calibrate posterior estimates.
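For readers unfamiliar with the two metrics the abstract relies on, here is a small sketch of expected calibration error (ECE) and temperature scaling. This is standard background, not CME itself; the binning scheme (10 equal-width bins) is an assumption.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: the bin-weighted average gap between mean confidence and accuracy."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(confidences[mask].mean() - correct[mask].mean())
            ece += mask.mean() * gap   # weight by fraction of examples in bin
    return ece

def temperature_scale(logits, T):
    """Divide logits by T before softmax; T > 1 softens overconfident outputs."""
    z = np.asarray(logits, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)   # numerical stability
    p = np.exp(z)
    return p / p.sum(axis=-1, keepdims=True)
```

A model that predicts with 0.9 confidence but is right only 80% of the time would contribute a 0.1 gap to ECE; raising T shrinks such gaps.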
Pre-trained Language Models (PLMs) have been applied to NLP tasks with promising results. Nevertheless, the fine-tuning procedure needs labeled data from the target domain, making it difficult to learn in low-resource scenarios where labels are scarce. To address these challenges, we propose Prompt-based Text Entailment (PTE) for low-resource named entity recognition, which better leverages the knowledge in PLMs. We first reformulate named entity recognition as a text entailment task: the original sentence, paired with entity-type-specific prompts, is fed into the PLM to obtain an entailment score for each candidate, and the entity type with the top score is selected as the final label. We then inject tagging labels into the prompts and treat words, rather than n-gram spans, as basic units, reducing the time complexity of generating candidates by n-gram enumeration. Experimental results demonstrate that the proposed PTE achieves competitive performance on the CoNLL03 dataset and outperforms fine-tuned counterparts on the MIT Movie and Few-NERD datasets in low-resource settings.
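A hedged sketch of the entailment reformulation described above. The prompt templates and the `entail_score` callable are hypothetical stand-ins (any PLM entailment head would fill that role); they are not PTE's released API.

```python
# Hypothetical entity-type prompts; PTE's actual templates may differ.
TYPE_PROMPTS = {
    "PER": "{span} is a person.",
    "LOC": "{span} is a location.",
    "ORG": "{span} is an organization.",
    "O":   "{span} is not a named entity.",
}

def classify_span(sentence, span, entail_score):
    """Score (sentence, hypothesis) for every entity type with a PLM
    entailment scorer, then pick the top-scoring type as the label."""
    scores = {
        label: entail_score(sentence, template.format(span=span))
        for label, template in TYPE_PROMPTS.items()
    }
    return max(scores, key=scores.get)
```

With words as basic units, `classify_span` is called once per word instead of once per enumerated n-gram span, which is the complexity reduction the abstract mentions.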
Table Question Answering (Table QA) refers to providing precise answers to users' questions from tables. In recent years there have been many works on Table QA, but a comprehensive survey of this research topic is lacking. We therefore aim to provide an overview of the available datasets and representative methods in Table QA. We classify existing Table QA methods into five categories according to their techniques: semantic-parsing-based, generative, extractive, matching-based, and retriever-reader-based methods. Moreover, since Table QA remains a challenging task for existing methods, we also identify and outline key challenges and discuss potential future directions for Table QA.
Medical dialogue generation is an important yet challenging task. Most previous works rely on attention mechanisms and large-scale pretrained language models. However, these methods often fail to acquire key information from long dialogue histories and thus to produce accurate and informative responses, because medical entities are usually scattered across multiple utterances with complex relations between them. To mitigate this problem, we propose a medical response generation model with key information recalling (MedPIR), built on two components: a knowledge-aware dialogue graph encoder and a recall-enhanced generator. The knowledge-aware dialogue graph encoder constructs a dialogue graph by exploiting the knowledge relations between entities in the utterances and encodes it with a graph attention network. The recall-enhanced generator then strengthens the use of this key information by producing a summary of the dialogue before generating the actual response. Experimental results on two large-scale medical dialogue datasets show that MedPIR outperforms strong baselines in BLEU scores and medical-entity F1 measure.
Automatic diagnosis has attracted increasing attention but remains challenging due to the multi-step reasoning it requires. Recent works usually address it with reinforcement learning methods. However, these methods show low efficiency and require task-specific reward functions. Considering that the conversation between a doctor and a patient allows the doctor to inquire about symptoms and make a diagnosis, the diagnosis process can naturally be regarded as generating a sequence that includes both symptoms and diagnoses. Inspired by this, we reformulate automatic diagnosis as a symptom Sequence Generation (SG) task and propose a simple but effective Transformer-based automatic diagnosis model. We first design a symptom attention framework to learn the generation of symptom inquiries and disease diagnoses. To alleviate the discrepancy between sequential generation and the orderless nature of implicit symptoms, we further design three orderless training mechanisms. Experiments on three public datasets show that our model outperforms baselines on disease diagnosis by 1%, 6%, and 11.5% with the highest training efficiency. Detailed analysis of symptom inquiry prediction demonstrates the potential of applying symptom sequence generation to automatic diagnosis.
Recent works have shown that interpretability and robustness are two key ingredients of trustworthy and reliable text classification. However, previous works usually address only one of these two aspects: i) how to extract accurate rationales that aid interpretability while benefiting prediction; ii) how to make the predictive model robust to different types of adversarial attacks. Intuitively, a model that produces helpful explanations should be more robust against adversarial attacks, because we cannot trust a model that outputs explanations yet changes its predictions under small perturbations. To this end, we propose a joint classification and rationale extraction model named AT-BMC. It includes two key mechanisms: mixed Adversarial Training (AT), which applies various perturbations in the discrete and embedding spaces to improve the model's robustness, and a Boundary Match Constraint (BMC), which helps locate rationales with the guidance of boundary information. Performance on benchmark datasets shows that the proposed AT-BMC outperforms baselines on both classification and rationale extraction by a large margin. Robustness analysis shows that AT-BMC decreases the attack success rate by up to 69%. The empirical results indicate a connection between robust models and better explanations.
Cross-modal retrieval has become one of the most important upgrades for text-only search engines (SE). Recently, with powerful representations of paired text-image inputs via early interaction, the accuracy of Vision-Language (VL) transformers has surpassed existing methods for text-image retrieval. However, when the same paradigm is used for inference, the efficiency of VL transformers is still too low to be applied in a real cross-modal SE. Inspired by the mechanism of human learning and the use of cross-modal knowledge, this paper proposes a novel Vision-Language Decomposed Transformer (VLDeformer), which greatly increases the efficiency of VL transformers while maintaining their outstanding accuracy. With the proposed method, cross-modal retrieval is divided into two stages: a VL transformer learning stage and a VL decomposition stage. The latter stage plays the role of single-modal indexing, which is to some extent like the term index of a text SE. The model learns cross-modal knowledge from early-interaction pre-training and is then decomposed into individual encoders. The decomposition requires only small target datasets for supervision and achieves a more than 1000x speedup with less than 0.6% average recall drop. VLDeformer also outperforms state-of-the-art visual-semantic embedding methods on COCO and Flickr30k.
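The efficiency gain from decomposition comes from replacing per-query early interaction with single-modal indexing. The following sketch of that second stage, under the assumption of cosine-similarity search over pre-computed embeddings, illustrates why inference becomes cheap; it is not VLDeformer's actual implementation.

```python
import numpy as np

def build_index(image_embeddings):
    """Offline stage: encode all images once and L2-normalize the vectors."""
    emb = np.asarray(image_embeddings, dtype=float)
    return emb / np.linalg.norm(emb, axis=1, keepdims=True)

def search(index, query_embedding, top_k=5):
    """Online stage: one text encoding plus a dot product against the index,
    instead of running the full VL transformer on every text-image pair."""
    q = np.asarray(query_embedding, dtype=float)
    q = q / np.linalg.norm(q)
    scores = index @ q                  # cosine similarity per image
    return np.argsort(-scores)[:top_k]  # best-matching image ids
```

Because the expensive pairwise transformer pass disappears from the query path, retrieval cost scales like a text search engine's term lookup, matching the 1000x-speedup claim in spirit.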
With the development of social networks, fake news for various commercial and political purposes has emerged in large numbers and spread widely in the online world. With deceptive words, people can easily be taken in by fake news and share it without any fact-checking. For example, during the 2016 US presidential election, various kinds of fake news about the candidates spread widely in both official news media and online social networks. Such fake news was generally released to smear opponents or support a candidate's side, and the misinformation in it was usually written to stir up voters' irrational emotions and enthusiasm. Fake news can sometimes have devastating effects, so an important goal for improving the credibility of online social networks is to identify fake news in a timely manner. In this paper, we propose to study the fake news detection problem. Automatic fake news identification is very difficult, because purely model-based fact-checking of news is still an open problem, and few existing models can be applied to solve it. Through a thorough investigation of fake news data, many useful explicit features can be identified from both the text and the images used in fake news. In addition to the explicit features, there are also hidden patterns in the words and images of fake news, which can be captured by a set of latent features extracted by the multiple convolutional layers in our model. This paper proposes a model named TI-CNN (Text and Image information based Convolutional Neural Network). By projecting the explicit and latent features into a unified feature space, TI-CNN is trained on text and image information simultaneously. Extensive experiments on a real-world fake news dataset demonstrate the effectiveness of TI-CNN.
Deep learning models can achieve high accuracy when trained on large amounts of labeled data. However, real-world scenarios often involve several challenges: Training data may become available in installments, may originate from multiple different domains, and may not contain labels for training. Certain settings, for instance medical applications, often involve further restrictions that prohibit retention of previously seen data due to privacy regulations. In this work, to address such challenges, we study unsupervised segmentation in continual learning scenarios that involve domain shift. To that end, we introduce GarDA (Generative Appearance Replay for continual Domain Adaptation), a generative-replay based approach that can adapt a segmentation model sequentially to new domains with unlabeled data. In contrast to single-step unsupervised domain adaptation (UDA), continual adaptation to a sequence of domains enables leveraging and consolidation of information from multiple domains. Unlike previous approaches in incremental UDA, our method does not require access to previously seen data, making it applicable in many practical scenarios. We evaluate GarDA on two datasets with different organs and modalities, where it substantially outperforms existing techniques.
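The core of a generative-replay approach like the one described above is that, when a new domain arrives, the model trains on a mix of new unlabeled data and samples drawn from a generator, so no previously seen data needs to be stored. The skeleton below sketches that loop; all callables (`train_step`, `fit_generator`, the generator itself) are hypothetical stand-ins, not GarDA's actual API.

```python
def continual_adapt(model, sample_replay, domains, train_step, fit_generator):
    """Adapt `model` to a sequence of domains with generative replay.

    sample_replay: callable returning one generated (replayed) batch.
    train_step:    callable updating the model on one batch.
    fit_generator: callable updating the generator to cover a new domain.
    """
    for i, domain_data in enumerate(domains):
        # From the second domain on, mix in replayed batches so knowledge
        # from earlier domains is consolidated without storing their data.
        replay = [sample_replay() for _ in range(len(domain_data))] if i > 0 else []
        for batch in list(domain_data) + replay:
            train_step(model, batch)
        # Refresh the generator so it can replay the newly seen domain later.
        fit_generator(domain_data)
    return model
```

The contrast with single-step UDA is visible in the loop structure: information accumulates across all domains in the sequence, while the privacy constraint is respected because only generated samples stand in for past data.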